Gradient methods for minimizing composite objective function
Authors
Abstract
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two convex terms: one is smooth and given by a black-box oracle, and another is general but simple and its structure is known. Despite the bad properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the good part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (which converge as O(1/k)), and an accelerated multistep version with convergence rate O(1/k^2), where k is the iteration counter. For all methods, we suggest some efficient “line search” procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
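To make the setting concrete, the sketch below implements a basic proximal-gradient step and a FISTA-style accelerated variant, each with a simple backtracking estimate of the local Lipschitz constant, on an l1-regularized least-squares instance. This is only a minimal illustration of the composite framework; the regularizer, backtracking rule, and constants are assumptions for the example, not the paper's exact primal, dual, or accelerated schemes.

```python
# Minimal sketch of composite minimization  min_x  f(x) + psi(x)  with f smooth
# and psi simple.  Here f is a least-squares term and psi = lam*||x||_1, whose
# prox is soft-thresholding; both choices are illustrative assumptions.
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: componentwise shrinkage toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(f, grad_f, prox_psi, x0, L0=1.0, iters=200):
    """Basic proximal-gradient method; backtracks on the Lipschitz estimate L."""
    x, L = x0.copy(), L0
    for _ in range(iters):
        fx, g = f(x), grad_f(x)
        while True:
            x_new = prox_psi(x - g / L, 1.0 / L)
            d = x_new - x
            if f(x_new) <= fx + g @ d + 0.5 * L * (d @ d):  # quadratic model holds
                break
            L *= 2.0
        x, L = x_new, max(L / 2.0, 1e-12)  # let the estimate decrease again
    return x

def accelerated_prox_gradient(f, grad_f, prox_psi, x0, L0=1.0, iters=200):
    """FISTA-style accelerated variant: O(1/k^2) rate on convex problems."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(iters):
        fy, g = f(y), grad_f(y)
        while True:
            x_new = prox_psi(y - g / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= fy + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Example: l1-regularized least squares  0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_psi = lambda v, step: soft_threshold(v, lam * step)
x_fast = accelerated_prox_gradient(f, grad_f, prox_psi, np.zeros(100))
```

The inner backtracking loop only re-tests the quadratic upper model a few times per iteration on average, which is the sense in which estimating the unknown Lipschitz parameter multiplies the per-iteration cost by a small constant factor.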
Similar works
A fast dual proximal gradient algorithm for convex minimization and applications
We consider the convex composite problem of minimizing the sum of a strongly convex function and a general extended valued convex function. We present a dual-based proximal gradient scheme for solving this problem. We show that although the dual objective function sequence converges to the optimal value at the rate O(1/k^2), the rate of convergence of the primal sequ...
Complexity bounds for primal-dual methods minimizing the model of objective function
We provide the Frank-Wolfe (≡ Conditional Gradients) method with a convergence analysis that allows one to approach a primal-dual solution of a convex optimization problem with a composite objective function. Additional properties of the complementary part of the objective (strong convexity) significantly accelerate the scheme. We also justify a new variant of this method, which can be seen as a trust-region sche...
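As a point of reference for the conditional-gradient step this abstract builds on, here is a minimal sketch of the classical Frank-Wolfe iteration over the probability simplex for a smooth quadratic. It shows only the textbook linear-minimization-oracle step with the standard 2/(k+2) step size; the objective and constraint set are assumptions for the example, and the primal-dual analysis and trust-region variant mentioned above are not reproduced.

```python
# Minimal sketch of a classical Frank-Wolfe (conditional gradient) iteration
# on the probability simplex for a smooth quadratic f(x) = 0.5*||Ax - b||^2.
import numpy as np

def frank_wolfe_simplex(grad_f, n, iters=500):
    x = np.full(n, 1.0 / n)                 # start at the simplex barycenter
    for k in range(iters):
        g = grad_f(x)
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0               # linear minimization oracle: a vertex
        gamma = 2.0 / (k + 2.0)             # standard step size giving O(1/k) rate
        x = (1.0 - gamma) * x + gamma * s   # stay feasible via convex combination
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 50)), rng.standard_normal(30)
grad_f = lambda x: A.T @ (A @ x - b)
x = frank_wolfe_simplex(grad_f, 50)
```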
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function. These upper bounds are tight at the current estimate, and each iteration monotonically drives the objective function downhill. Such a simple principle is widely applicable and has been very popular in various scientific fields, especially in signal processing and statist...
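To illustrate the principle in its simplest form, the sketch below uses the quadratic upper bound available for any function with an L-Lipschitz gradient as the majorizing surrogate; minimizing that surrogate in closed form recovers a gradient step, and the objective decreases monotonically. The logistic-loss example and the constant L are assumptions for illustration, not the paper's incremental scheme.

```python
# Minimal sketch of majorization-minimization (MM): at each iterate x, minimize
# a surrogate q(y; x) = f(x) + grad_f(x)^T (y - x) + (L/2)*||y - x||^2 that
# upper-bounds f (valid when grad_f is L-Lipschitz) and is tight at x.
# Its exact minimizer is a gradient step, so each iteration drives f downhill.
import numpy as np

def mm_quadratic_surrogate(grad_f, L, x0, iters=100):
    x = x0.copy()
    for _ in range(iters):
        x = x - grad_f(x) / L  # closed-form minimizer of the majorizer q(.; x)
    return x

# Example (assumed for illustration): logistic loss of one sample (a, y).
# Its Hessian is bounded by 0.25 * a a^T, so L = 0.25*||a||^2 is a valid choice.
a, y = np.array([1.0, -2.0, 0.5]), 1.0
f = lambda w: np.log1p(np.exp(-y * (a @ w)))
grad_f = lambda w: -y * a / (1.0 + np.exp(y * (a @ w)))
w = mm_quadratic_surrogate(grad_f, 0.25 * (a @ a), np.zeros(3))
print(f(np.zeros(3)), ">=", f(w))  # monotone decrease of the objective
```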
Accelerated gradient sliding for structured convex optimization
Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show that the AGS method can skip the gradient...
An Adaptive Accelerated Proximal Gradient Method and its Homotopy Continuation for Sparse Optimization
We first propose an adaptive accelerated proximal gradient (APG) method for minimizing strongly convex composite functions with unknown convexity parameters. This method incorporates a restarting scheme to automatically estimate the strong convexity parameter and achieves a nearly optimal iteration complexity. Then we consider the l1-regularized least-squares (l1-LS) problem in the high-dimensio...
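A common way to cope with an unknown strong convexity parameter is to restart the accelerated method adaptively; the sketch below uses a gradient-based restart test inside a FISTA-style loop on an l1-regularized least-squares instance. This is a generic restart heuristic under assumed data, not the adaptive APG or homotopy continuation scheme described above.

```python
# Minimal sketch of restarting an accelerated proximal-gradient run when the
# momentum direction stops being a descent direction (a common heuristic when
# the strong convexity parameter is unknown).  f, L, and lam are assumptions.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_with_restart(grad_f, L, lam, x0, iters=500):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = soft_threshold(y - grad_f(y) / L, lam / L)   # prox-gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y_new = x_new + ((t - 1.0) / t_new) * (x_new - x)
        # Gradient-based restart test: if momentum points uphill, reset it.
        if (y - x_new) @ (x_new - x) > 0.0:
            t_new, y_new = 1.0, x_new
        x, y, t = x_new, y_new, t_new
    return x

rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((60, 80)), rng.standard_normal(60), 0.05
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad f
x = apg_with_restart(lambda x: A.T @ (A @ x - b), L, lam, np.zeros(80))
```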